
    Transportation mode recognition fusing wearable motion, sound and vision sensors

    We present the first work that investigates the potential of improving the performance of transportation mode recognition by fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, one for each type of sensor. We then propose two schemes that fuse the classification results from the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules, including Sum, Product, Majority Voting and Borda Count. The second scheme is an adaptive fuser built as another classifier (including Naive Bayes, Decision Tree, Random Forest and Neural Network) that learns enhanced predictions by combining the outputs of the three mono-modal classifiers. We verify the advantage of the proposed method on the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. The recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation. When assessing the generalization of the model to unseen data, we show that while performance is reduced, as expected, for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Beyond the actual performance increase, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
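
    As a concrete illustration of the fixed-rule fusion scheme, the minimal sketch below combines the per-class probability vectors of three hypothetical mono-modal classifiers with the Sum, Product, Majority Voting and Borda Count rules. The class list follows the SHL dataset, but the probability values and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical per-class probability outputs from the three mono-modal
# classifiers (motion, sound, vision) for one window; 8 transportation classes.
CLASSES = ["Still", "Walk", "Run", "Bike", "Bus", "Car", "Train", "Subway"]

def fuse_fixed(probs, rule="sum"):
    """Fuse a list of per-classifier probability vectors with a fixed rule."""
    probs = np.asarray(probs)                # shape: (n_classifiers, n_classes)
    if rule == "sum":                        # sum of posteriors
        scores = probs.sum(axis=0)
    elif rule == "product":                  # product of posteriors
        scores = probs.prod(axis=0)
    elif rule == "majority":                 # one vote per classifier
        votes = probs.argmax(axis=1)
        scores = np.bincount(votes, minlength=probs.shape[1])
    elif rule == "borda":                    # rank-based Borda count
        ranks = probs.argsort(axis=1).argsort(axis=1)  # 0 = worst rank
        scores = ranks.sum(axis=0)
    else:
        raise ValueError(rule)
    return CLASSES[int(scores.argmax())]

# Example: three classifiers mildly disagreeing between Train and Subway.
motion = np.array([0.02, 0.02, 0.02, 0.02, 0.05, 0.05, 0.42, 0.40])
sound  = np.array([0.01, 0.01, 0.01, 0.01, 0.06, 0.05, 0.55, 0.30])
vision = np.array([0.05, 0.05, 0.05, 0.05, 0.10, 0.10, 0.30, 0.30])
print(fuse_fixed([motion, sound, vision], rule="sum"))   # -> "Train"
```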

    Sound-based transportation mode recognition with smartphones

    Smartphone-based identification of the user's mode of transportation is important for context-aware services. We investigate the feasibility of recognizing the 8 most common modes of locomotion and transportation from the sound recorded by a smartphone carried by the user. We propose a convolutional neural network based recognition pipeline, which operates on the short-time Fourier transform (STFT) spectrogram of the sound in the log domain. Experiments with the Sussex-Huawei Locomotion-Transportation (SHL) dataset on 366 hours of data show promising results: the proposed pipeline can recognize the activities Still, Walk, Run, Bike, Car, Bus, Train and Subway with a global accuracy of 86.6%, which is 23% higher than classical machine learning pipelines. It is shown that sound is particularly useful for distinguishing between various vehicle activities (e.g. Car vs Bus, Train vs Subway). This discriminability is complementary to the widely used motion sensors, which are poor at distinguishing between rail and road transport.
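
    A minimal sketch of such a pipeline is shown below: it computes the log-domain STFT spectrogram of an audio window and passes it through a small 2-D CNN. The sampling rate, FFT parameters and layer sizes are assumptions chosen for illustration; the paper's actual architecture and preprocessing may differ.

```python
import numpy as np
from scipy import signal
import torch
import torch.nn as nn

def log_stft_spectrogram(audio, fs=16000, nperseg=1024, noverlap=512):
    """Log-magnitude STFT spectrogram of a mono audio window (assumed params)."""
    _, _, Z = signal.stft(audio, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.log(np.abs(Z) + 1e-6).astype(np.float32)

class SoundCNN(nn.Module):
    """Small CNN over the (freq, time) spectrogram; layer sizes are illustrative."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                    # x: (batch, 1, freq, time)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# One 5-second window of (random stand-in) audio at 16 kHz:
spec = log_stft_spectrogram(np.random.randn(5 * 16000))
logits = SoundCNN()(torch.from_numpy(spec)[None, None])  # (1, 8) class scores
```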

    Evolutionary morphogenesis for multi-cellular systems

    With a gene required for each phenotypic trait, direct genetic encodings may show poor scalability to increasing phenotype length. Developmental systems may alleviate this problem by providing more efficient indirect genotype-to-phenotype mappings. A novel classification of multi-cellular developmental systems in evolvable hardware is introduced. It reveals a category of developmental systems that has rarely been explored so far. We argue that this category is where most of the benefits of developmental systems lie (e.g. speed, scalability, robustness, and inter-cellular and environmental interactions that allow fault-tolerance or adaptivity). This article describes a very simple genetic encoding and developmental system designed for multi-cellular circuits that belongs to this category. We refer to it as the morphogenetic system. The morphogenetic system is inspired by gene expression and cellular differentiation. It focuses on low computational requirements, which allows fast execution and a compact hardware implementation. The morphogenetic system shows better scalability than a direct genetic encoding in the evolution of structures of differentiated cells, and its dynamics provides fault-tolerance up to high fault rates. It outperforms a direct genetic encoding when evolving spiking neural networks for pattern recognition and robot navigation. The results obtained with the morphogenetic system indicate that this "minimalist" approach to developmental systems merits further study.
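
    To make the indirect-encoding idea concrete, the toy sketch below grows a grid of differentiated cells by diffusing signals from genome-placed sources and thresholding each cell's local signal level into a cell type. It illustrates only the general gene-expression and differentiation principle; the source placement, diffusion rule and thresholds are invented for this example and do not reproduce the article's actual morphogenetic system.

```python
import numpy as np

def develop(genome, grid=(8, 8), steps=10):
    """Indirect encoding: a small genome develops into a grid of cell types."""
    sources, thresholds = genome                   # genome = two small tables
    signals = np.zeros(grid)
    for (y, x) in sources:                         # genome places signal sources
        signals[y, x] = 1.0
    for _ in range(steps):                         # simple toroidal diffusion
        signals = 0.5 * signals + 0.125 * (
            np.roll(signals, 1, 0) + np.roll(signals, -1, 0) +
            np.roll(signals, 1, 1) + np.roll(signals, -1, 1))
    # Each cell differentiates according to the threshold band its
    # final signal level falls into.
    return np.searchsorted(thresholds, signals)    # phenotype: grid of cell types

genome = ([(1, 1), (6, 6)], np.array([0.02, 0.05, 0.1]))
phenotype = develop(genome)                        # 8x8 array of cell types 0..3
```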

    Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. We further show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation.
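
    The sketch below shows the general shape of such a convolutional-plus-LSTM recogniser: 1-D convolutions extract per-timestep features from the raw multimodal sensor channels, and stacked LSTM layers model their temporal dynamics. All sizes (channel count, window length, number of classes, hidden units) are illustrative assumptions rather than the framework's actual configuration.

```python
import torch
import torch.nn as nn

class ConvLSTM(nn.Module):
    """Conv + LSTM recogniser in the spirit of the framework (sizes assumed)."""
    def __init__(self, n_channels=113, n_classes=18, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(               # 1-D convolutions along time
            nn.Conv1d(n_channels, 64, 5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2))         # -> (batch, 64, time)
        h, _ = self.lstm(h.transpose(1, 2))      # -> (batch, time, hidden)
        return self.out(h[:, -1])                # classify from last timestep

logits = ConvLSTM()(torch.randn(4, 24, 113))     # 4 windows of 24 timesteps
```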

    On-Body Sensing: From Gesture-Based Input to Activity-Driven Interaction


    Performance analysis of Routing Protocol for Low power and Lossy Networks (RPL) in large scale networks

    With growing needs to better understand our environments, the Internet-of-Things (IoT) is gaining importance among information and communication technologies. IoT will enable billions of intelligent devices and networks, such as wireless sensor networks (WSNs), to be connected and integrated with computer networks. In order to support large scale networks, the IETF has defined the Routing Protocol for Low power and Lossy Networks (RPL) to facilitate multi-hop connectivity. In this paper, we provide an in-depth review of current research activities. Specifically, the development of large scale simulations and the performance evaluation under various objective functions and routing metrics are pioneering contributions to the study of RPL. The results are expected to serve as a reference for evaluating the effectiveness of routing solutions in large scale IoT use cases.
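
    To illustrate what an RPL objective function decides, the sketch below implements the basic parent-selection step with an ETX-like additive metric in the spirit of MRHOF: a node computes the rank it would obtain through each candidate parent and picks the minimum. The MinHopRankIncrease default of 256 follows RFC 6550, but the neighbour data and the exact rank formula are simplified assumptions for this example.

```python
MIN_HOP_RANK_INCREASE = 256   # RFC 6550 default unit of rank

def rank_through(parent_rank, link_etx):
    """Candidate rank via a parent: parent rank plus the scaled link ETX cost."""
    return parent_rank + int(link_etx * MIN_HOP_RANK_INCREASE)

def select_parent(neighbours):
    """neighbours: list of (node_id, advertised_rank, link_etx) from DIOs."""
    best = min(neighbours, key=lambda n: rank_through(n[1], n[2]))
    return best[0], rank_through(best[1], best[2])

# A node hearing three DIO messages from candidate parents:
neighbours = [("A", 256, 1.2), ("B", 512, 1.0), ("C", 256, 2.5)]
parent, my_rank = select_parent(neighbours)       # picks "A", rank 563
```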

    Modeling Service-Oriented Context Processing in Dynamic Body Area Networks

    Context processing in Body Area Networks (BANs) faces unique challenges due to user and node mobility, the need for real-time adaptation to dynamic topological and contextual changes, and the heterogeneous processing capabilities and energy constraints of the available devices. This paper proposes a service-oriented framework for the execution of context recognition algorithms. We describe and theoretically analyze the performance of the main framework components, including the sensor network organization, service discovery, service graph construction, and service distribution and mapping. The theoretical results are followed by a simulation of the proposed framework as a whole, showing the overall cost of dynamically distributing applications on the network.
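
    As a toy illustration of the service distribution and mapping step, the sketch below greedily assigns the services of a context-recognition pipeline to BAN nodes with sufficient remaining capacity. The cost model, service names and capacities are invented for the example; the paper's framework uses its own service graph construction and mapping procedure.

```python
def map_services(services, nodes):
    """services: [(name, load)]; nodes: {node: capacity}. Greedy assignment."""
    placement, remaining = {}, dict(nodes)
    for name, load in sorted(services, key=lambda s: -s[1]):  # biggest first
        candidates = [n for n, cap in remaining.items() if cap >= load]
        if not candidates:
            raise RuntimeError(f"no node can host {name}")
        node = max(candidates, key=remaining.get)   # most free capacity wins
        placement[name] = node
        remaining[node] -= load
    return placement

services = [("feature_extraction", 3), ("segmentation", 2), ("classification", 4)]
nodes = {"wrist_sensor": 4, "chest_hub": 8}
print(map_services(services, nodes))  # e.g. classification lands on the chest hub
```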